
    Estimator stability analysis in SLAM

    IFAC Symposium on Intelligent Autonomous Vehicles (IAV), 2004, Lisbon (Portugal). This work presents an analysis of the state estimation error dynamics of a linear system within the Kalman-filter-based approach to Simultaneous Localization and Map Building. Our objective is to demonstrate that these dynamics are marginally stable. The paper also presents the modifications required in the observation model in order to guarantee zero-mean, stable error dynamics. Simulations for a one-dimensional robot and a planar vehicle are presented. This work was supported by the project 'Supervised learning of industrial scenes by means of an active vision equipped mobile robot' (J-00063). Peer reviewed.
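    The kind of simulation mentioned above can be reproduced in a few lines. Below is a minimal sketch of a Kalman filter for a one-dimensional robot and a single static landmark observed through a relative measurement; the noise variances, step size, and initial uncertainties are illustrative assumptions, not values from the paper. The point is only that the error covariance settles to a bounded, non-zero matrix, which is what marginally stable error dynamics look like in practice.

        import numpy as np

        # Minimal 1D SLAM Kalman filter: state x = [robot position, landmark position].
        # The robot takes noisy steps; the only measurement is the relative offset
        # landmark - robot.  Noise levels and step size are illustrative assumptions.

        F = np.eye(2)                        # robot and (static) landmark
        B = np.array([[1.0], [0.0]])         # control acts on the robot only
        H = np.array([[-1.0, 1.0]])          # z = x_landmark - x_robot + noise
        q, r = 0.01, 0.04                    # process / measurement noise variances
        Q = np.diag([q, 0.0])
        R = np.array([[r]])

        rng = np.random.default_rng(0)
        x_true = np.array([0.0, 5.0])
        x_est = np.array([0.0, 5.0])
        P = np.diag([0.0, 1.0])              # known start pose, uncertain landmark

        for k in range(200):
            u = np.array([0.1])
            # True motion with process noise on the robot only
            x_true = F @ x_true + B @ u + np.array([rng.normal(0.0, np.sqrt(q)), 0.0])
            # Prediction
            x_est = F @ x_est + B @ u
            P = F @ P @ F.T + Q
            # Update with the relative measurement
            z = H @ x_true + rng.normal(0.0, np.sqrt(r))
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_est = x_est + K @ (z - H @ x_est)
            P = (np.eye(2) - K @ H) @ P

        print("final state error:", x_true - x_est)
        print("final covariance:\n", P)      # bounded, but not driven to zero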

    Stochastic State Estimation for Simultaneous Localization and Map Building in Mobile Robotics

    In Cutting Edge Robotics, 223-242. Advanced Robotic Systems Press, 2005. The study of stochastic models for Simultaneous Localization and Map Building (SLAM) in mobile robotics has been an active research topic for over fifteen years. Within the Kalman filter (KF) approach to SLAM, seminal work (Smith and Cheeseman, 1986) suggested that as successive landmark observations take place, the correlation between the estimates of the landmark locations in the map grows continuously. This observation was later ratified (Dissanayake et al., 2001) with a proof showing that the estimated map converges monotonically to a relative map with zero uncertainty. They also showed that the absolute accuracy of the map reaches a lower bound defined only by the initial vehicle uncertainty, and proved it for a one-landmark vehicle with no process noise. From an estimation-theoretic point of view, we address these results as a consequence of partial observability. We show that error-free reconstruction of the map state vector is not possible with typical measurement models, regardless of the vehicle model chosen, and show experimentally that the expected error in state estimation is proportional to the number of landmarks used. Error-free reconstruction is only possible once full observability is guaranteed. This work was supported by the projects 'Supervised learning of industrial scenes by means of an active vision equipped mobile robot' (J-00063) and 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929). Peer reviewed.
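    The partial observability argument can be illustrated numerically. The sketch below builds the observability matrix of a linearized 1D SLAM model in which every measurement is relative (landmark minus robot) and checks its rank; the static model and dimensions are illustrative assumptions, not the paper's planar vehicle. The rank always falls one short of the state dimension, so the absolute map state cannot be reconstructed error-free until an absolute (anchoring) measurement makes the system fully observable.

        import numpy as np

        # Observability check for a linear 1D SLAM model with n landmarks.
        # State: [robot, l1, ..., ln]; each measurement is l_i - robot (relative only).
        # The rank deficiency of the observability matrix shows why the absolute map
        # cannot be reconstructed without an anchoring (absolute) measurement.

        def observability_rank(n_landmarks):
            n = n_landmarks + 1
            F = np.eye(n)                                  # static linearized model
            H = np.hstack([-np.ones((n_landmarks, 1)),     # -robot
                           np.eye(n_landmarks)])           # +landmark_i
            O = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])
            return np.linalg.matrix_rank(O)

        for m in range(1, 6):
            print(f"{m} landmarks: rank {observability_rank(m)} of {m + 1} states")
        # The rank is always one short of the state dimension: the common (absolute)
        # offset of robot and landmarks is unobservable from relative measurements.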

    Research at the learning and vision mobile robotics group 2004-2005

    Spanish Congress on Informatics (CEDI), 2005, Granada (Spain). This article presents the current lines of research on wheeled mobile robotics pursued at the Learning and Vision Mobile Robotics Group (IRI). It includes an overview of recent results produced by our group in a wide range of areas, including robot localization, color invariance, segmentation, tracking, audio processing, and object learning and recognition. This work was supported by the projects 'Supervised learning of industrial scenes by means of an active vision equipped mobile robot' (J-00063) and 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929). Peer reviewed.

    2-D visual servoing for MARCO

    This report presents an image-based visual servoing scheme for our mobile platform, MARCO. The kinematic model of MARCO is derived in this work, and an implementation has been developed using the camera calibration parameters. Simulations and experimental results are presented. This work was supported by the project 'Navegación basada en visión de robots autónomos en entornos no estructurados' (070-724).
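    As a point of reference for the image-based formulation, the sketch below implements the classical visual servoing law v = -lambda * pinv(L) * (s - s*) for point features, where L is the interaction matrix built from normalized image coordinates and depth. This is the textbook scheme, not the MARCO-specific kinematic derivation from the report; the point depths, gain, and feature positions are illustrative assumptions.

        import numpy as np

        # Classical image-based visual servoing (IBVS) for point features:
        #   v = -lambda * pinv(L) * (s - s_star)
        # Depths Z, gain and feature positions below are illustrative assumptions.

        def interaction_matrix(x, y, Z):
            """Interaction (image Jacobian) matrix of one normalized image point."""
            return np.array([
                [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
            ])

        def ibvs_velocity(points, desired, depths, gain=0.5):
            """Camera twist [vx, vy, vz, wx, wy, wz] driving points toward desired."""
            L = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(points, depths)])
            error = (np.asarray(points) - np.asarray(desired)).ravel()
            return -gain * np.linalg.pinv(L) @ error

        # Example: four coplanar points slightly off their desired positions.
        current = [(0.11, 0.10), (-0.09, 0.10), (-0.10, -0.11), (0.10, -0.10)]
        desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
        print(ibvs_velocity(current, desired, depths=[1.0] * 4))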

    Autonomous single camera exploration

    Presented at the 2nd Jornada de Recerca en Automàtica, Visió i Robòtica (AVR/2006), held in Barcelona (Spain). In this paper we present an active exploration strategy for a mobile robot navigating in 3D. The aim is to control a moving robot that autonomously builds a visual feature map while at the same time optimizing its localization in this map. The technique chooses the most appropriate commands by maximizing the information gain between prior states and measurements, while performing 6-DOF bearing-only SLAM at video rate. Maximizing the mutual information helps the vehicle avoid the ill-conditioned measurements inherent to bearing-only SLAM. To validate the approach, extensive simulations over rugged terrain have been performed. Moreover, experimental results are shown for the technique tested on a synchro-drive mobile robot platform. This work was supported by the projects 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929) and 'Perception, action & cognition through learning of object-action complexes' (4915). This research is supported by the Spanish Ministry of Education and Science through an FPI scholarship to TVC under project DPI 2004-5414 to AS, by project TIC 2003-09291, and by the EU PACO-PLUS project FP6-2004-IST-4-27657 to JAC. Peer reviewed.
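    The command-selection criterion can be sketched for the Gaussian case: with jointly Gaussian states and a linear(ized) measurement model, the mutual information between the prior and the posterior reduces to a log-ratio of covariance determinants, and the action maximizing it is chosen. The candidate measurement Jacobians and noise levels below are illustrative assumptions, not the paper's 6-DOF bearing-only models.

        import numpy as np

        # Information-gain based action selection for a Gaussian filter.
        # For a linear update, I(prior; posterior) = 0.5 * log(det P_prior / det P_post).
        # Candidate measurement models and noise values are illustrative assumptions.

        def mutual_information(P_prior, H, R):
            S = H @ P_prior @ H.T + R
            K = P_prior @ H.T @ np.linalg.inv(S)
            P_post = (np.eye(P_prior.shape[0]) - K @ H) @ P_prior
            _, logdet_prior = np.linalg.slogdet(P_prior)
            _, logdet_post = np.linalg.slogdet(P_post)
            return 0.5 * (logdet_prior - logdet_post)

        P = np.diag([0.5, 0.5, 0.2])          # prior covariance over [x, y, yaw]
        R = np.diag([0.01, 0.01])
        candidates = {
            "move forward": np.array([[1.0, 0.0, 0.0], [0.0, 0.2, 0.0]]),
            "turn and look": np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]]),
        }
        for action, H in candidates.items():
            print(f"{action}: {mutual_information(P, H, R):.3f} nats")
        best = max(candidates, key=lambda a: mutual_information(P, candidates[a], R))
        print("selected action:", best)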

    Multirobot C-SLAM: Simultaneous localization, control, and mapping

    ICRA Workshop on Network Robot Systems (ICRA NRS), 2005, Barcelona (Spain). This paper addresses closing the low-level control loop during multirobot Simultaneous Localization and Map Building from an estimation- and control-theoretic viewpoint. We present a multi-vehicle control strategy that uses the state estimates generated by the SLAM algorithm as input to a multi-vehicle controller. Given the separability between optimal state estimation and regulation, we show that the tracking error does not influence the estimation performance of a fully observable EKF-based multirobot SLAM implementation and, vice versa, that estimation errors do not undermine controller performance. Furthermore, both the controller and the estimator are shown to be asymptotically stable. The feasibility of using this technique to close the perception-action loop during multirobot SLAM is validated with simulation results. This work was supported by the project 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929). Peer reviewed.
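    The separation argument can be illustrated with a standard discrete-time LQG sketch: the regulator gain and the filter gain are designed independently, and the controller acts only on the filter estimate. The single-vehicle double-integrator model, noise values, and cost weights below are illustrative assumptions; this is not the multirobot EKF-SLAM formulation of the paper.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        # LQG sketch of the separation idea: control gain (LQR) and steady-state
        # filter gain (Kalman) are designed independently; the controller acts on
        # the estimate.  Single-vehicle 1D double integrator, illustrative values.

        A = np.array([[1.0, 0.1], [0.0, 1.0]])      # position / velocity
        B = np.array([[0.0], [0.1]])
        H = np.eye(2)                                # fully observable measurement
        Q, R = np.diag([0.001, 0.001]), np.diag([0.01, 0.01])
        Qc, Rc = np.diag([1.0, 0.1]), np.array([[0.1]])

        Pc = solve_discrete_are(A, B, Qc, Rc)
        K_lqr = np.linalg.inv(Rc + B.T @ Pc @ B) @ B.T @ Pc @ A
        Pf = solve_discrete_are(A.T, H.T, Q, R)
        K_f = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

        rng = np.random.default_rng(1)
        x, x_hat = np.array([1.0, 0.0]), np.zeros(2)
        for k in range(100):
            u = -(K_lqr @ x_hat)                     # control computed from the estimate
            x = A @ x + (B @ u).ravel() + rng.multivariate_normal([0, 0], Q)
            z = H @ x + rng.multivariate_normal([0, 0], R)
            x_pred = A @ x_hat + (B @ u).ravel()
            x_hat = x_pred + K_f @ (z - H @ x_pred)

        print("final state:", x, " final estimate:", x_hat)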

    Conditions for suboptimal filter stability in SLAM

    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2004, Sendai (Japan). In this article we show marginal stability in SLAM, guaranteeing convergence to a non-zero-mean state error estimate bounded by a constant value. Moreover, marginal stability also guarantees convergence of the Riccati equation of the one-step-ahead state error covariance to at least one positive semidefinite (PSD) steady-state solution. In the search for real-time implementations of SLAM, covariance inflation methods produce a suboptimal filter that may eventually lead to the computation of an unbounded state error covariance. We provide tight constraints on the amount of decorrelation possible in order to guarantee convergence of the state error covariance and, at the same time, a linear-time implementation of SLAM. This work was supported by the project 'Supervised learning of industrial scenes by means of an active vision equipped mobile robot' (J-00063). Peer reviewed.
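    The covariance convergence claim can be checked on a toy model by iterating the one-step-ahead Riccati recursion directly. The two-state model below (robot plus one static landmark, relative observation) and its noise values are illustrative assumptions; the recursion settles to a positive semidefinite steady-state solution, which is the behaviour the paper's bounds on decorrelation are designed to preserve. The suboptimal, decorrelated filter itself is not reproduced here.

        import numpy as np

        # One-step-ahead Riccati recursion for a simple linear SLAM-like model
        # (robot + one static landmark, relative measurement).  Illustrative noise
        # values; the point is that the iteration settles to a PSD steady state.

        F = np.eye(2)                       # marginally stable (static) model
        H = np.array([[-1.0, 1.0]])         # relative landmark observation
        Q = np.diag([0.01, 0.0])            # process noise on the robot only
        R = np.array([[0.04]])

        P = np.diag([1.0, 1.0])
        for k in range(500):
            S = H @ P @ H.T + R
            K = F @ P @ H.T @ np.linalg.inv(S)
            P_next = F @ P @ F.T - K @ S @ K.T + Q
            change = np.linalg.norm(P_next - P)
            P = P_next

        print("last iteration change:", change)
        print("steady-state one-step-ahead covariance:\n", P)
        print("eigenvalues (PSD check):", np.linalg.eigvalsh(P))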

    Large scale visual odometry using stereo vision

    Presented at ACRA 2009, held in Sydney (Australia), 2-4 December. This paper presents a system for egomotion estimation using a stereo camera head. The camera motion is estimated from features tracked along a video sequence. The system also estimates the three-dimensional geometry of the environment by fusing the visual information from multiple views. Furthermore, the paper compares two different algorithms. The first applies triangulation to obtain 3D points; motion estimation from 3D points suffers from non-isotropic noise due to the large uncertainty in depth estimation. To deal with this problem, we present results with a second approach that works directly in disparity space. Experimental results using a mobile platform are presented. The experiments cover long distances in urban-like environments with dynamic objects present. The system presented is part of a larger project on autonomous navigation using vision only. This work has been supported in part by CONACYT and SEP Mexico, the Rio Tinto Centre for Mine Automation, the ARC Centre of Excellence programme funded by the Australian Research Council (ARC) and the New South Wales State Government, and the Spanish Ministry of Innovation and Science. Peer reviewed.
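    The first of the two compared algorithms (triangulation of 3D points followed by motion estimation) can be sketched with standard building blocks: stereo triangulation from disparity, and a least-squares rigid alignment of matched 3D points (Arun/Kabsch SVD method). The camera parameters and the synthetic correspondences below are illustrative assumptions, and the disparity-space variant is not reproduced here.

        import numpy as np

        # Stereo triangulation plus 3D-3D rigid motion estimation (SVD alignment).
        # Generic "triangulate then align" scheme; camera parameters and the
        # synthetic point cloud are illustrative assumptions.

        def triangulate(uL, vL, uR, f, cx, cy, baseline):
            """Reconstruct a 3D point in the left camera frame from a stereo match."""
            disparity = uL - uR
            Z = f * baseline / disparity
            return np.array([(uL - cx) * Z / f, (vL - cy) * Z / f, Z])

        def estimate_motion(P_prev, P_curr):
            """Least-squares rigid transform (R, t) mapping P_prev onto P_curr."""
            mu_p, mu_c = P_prev.mean(axis=0), P_curr.mean(axis=0)
            U, _, Vt = np.linalg.svd((P_prev - mu_p).T @ (P_curr - mu_c))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, mu_c - R @ mu_p

        # Triangulation of a single match (20 px disparity, 0.12 m baseline).
        print(triangulate(340, 240, 320, f=500.0, cx=320.0, cy=240.0, baseline=0.12))

        # Synthetic check: rotate/translate a small point cloud, recover the motion.
        rng = np.random.default_rng(0)
        P_prev = rng.uniform(-2, 2, size=(20, 3)) + np.array([0.0, 0.0, 8.0])
        angle = np.deg2rad(3.0)
        R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                           [0, 1, 0],
                           [-np.sin(angle), 0, np.cos(angle)]])
        t_true = np.array([0.05, 0.0, 0.4])
        P_curr = P_prev @ R_true.T + t_true
        R_est, t_est = estimate_motion(P_prev, P_curr)
        print("rotation error:", np.linalg.norm(R_est - R_true))
        print("translation error:", np.linalg.norm(t_est - t_true))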